
    Teaching Parallel Programming Using Java

    This paper presents an overview of the "Applied Parallel Computing" course taught to final-year Software Engineering undergraduate students in Spring 2014 at NUST, Pakistan. The main objective of the course was to introduce practical parallel programming tools and techniques for shared and distributed memory concurrent systems. A unique aspect of the course was that Java was used as the principal programming language. The course was divided into three sections. The first section covered parallel programming techniques for shared memory systems, including multicore and Symmetric Multi-Processor (SMP) systems. In this section, Java threads were taught as a viable programming API for such systems. The second section was dedicated to parallel programming tools for distributed memory systems, including clusters and networks of computers. We used MPJ Express, a Java MPI library, for the programming assignments and lab work in this section. The third and final section covered advanced topics, including the MapReduce programming model using Hadoop and General-Purpose Computing on Graphics Processing Units (GPGPU).
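
    As a flavor of the shared memory material, here is a minimal sketch of the Java-threads style the first section teaches: each worker sums a slice of an array, and the main thread combines the partial results after join(). The array size and thread count are illustrative choices, not taken from the course.

        // Minimal sketch: shared memory parallel sum with plain Java threads.
        public class ParallelSum {
            public static void main(String[] args) throws InterruptedException {
                final int n = 1_000_000, nThreads = 4;   // illustrative sizes
                final double[] data = new double[n];
                java.util.Arrays.fill(data, 1.0);

                final double[] partial = new double[nThreads];
                Thread[] workers = new Thread[nThreads];
                for (int t = 0; t < nThreads; t++) {
                    final int id = t;
                    final int lo = id * n / nThreads, hi = (id + 1) * n / nThreads;
                    workers[t] = new Thread(() -> {
                        double sum = 0.0;
                        for (int i = lo; i < hi; i++) sum += data[i];
                        partial[id] = sum;      // each worker writes only its own slot
                    });
                    workers[t].start();
                }
                double total = 0.0;
                for (int t = 0; t < nThreads; t++) {
                    workers[t].join();          // join() also makes the write visible
                    total += partial[t];
                }
                System.out.println("sum = " + total);
            }
        }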

    Thoughts on the structure of an MPJ reference implementation

    We sketch a proposed reference implementation for MPJ, the Java Grande Forum's MPI-like message-passing API [9, 3]. The proposal relies heavily on RMI and Jini for finding computational resources, creating slave processes, and handling failures. User-level communication is implemented efficiently directly on top of Java sockets.
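
    As a feel for that lowest layer, the sketch below (illustrative, not the proposed MPJ code) sends a double array over a plain Java socket stream and reads it back on the peer; a real device would add ranks, tags, and message headers on top.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Illustrative point-to-point messaging directly on Java sockets.
        public class SocketMessage {
            // Sender: stream the array length, then the elements.
            static void send(String host, int port, double[] msg) throws Exception {
                try (Socket s = new Socket(host, port);
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    out.writeInt(msg.length);
                    for (double d : msg) out.writeDouble(d);
                }
            }
            // Receiver: accept one connection and read the message back.
            static double[] recv(int port) throws Exception {
                try (ServerSocket ss = new ServerSocket(port);
                     Socket s = ss.accept();
                     DataInputStream in = new DataInputStream(s.getInputStream())) {
                    double[] msg = new double[in.readInt()];
                    for (int i = 0; i < msg.length; i++) msg[i] = in.readDouble();
                    return msg;
                }
            }
        }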

    MPJ Express meets YARN: towards Java HPC on Hadoop systems

    Many organizations, including academic, research, and commercial institutions, have invested heavily in setting up High Performance Computing (HPC) facilities for running computational science applications. On the other hand, the Apache Hadoop software, since emerging in 2005, has become a popular, reliable, and scalable open-source framework for processing large-scale data (Big Data). Realizing the importance and significance of Big Data, an increasing number of organizations are investing in relatively cheaper Hadoop clusters for executing their mission-critical data processing applications. An issue here is that system administrators at these sites might have to maintain two parallel facilities for running HPC and Hadoop computations. This, of course, is not ideal, due to redundant maintenance work and poor economics. This paper attempts to bridge this gap by allowing HPC and Hadoop jobs to co-exist on a single hardware facility. We achieve this goal by exploiting YARN (introduced in Hadoop v2.0), which decouples the computational and resource scheduling part of the Hadoop framework from HDFS. In this context, we have developed a YARN-based reference runtime system for the MPJ Express software that allows executing parallel MPI-like Java applications on Hadoop clusters. The main contribution of this paper is to provide the Big Data community with access to MPI-like programming using MPJ Express. As an aside, this work allows parallel Java applications to perform computations on data stored in the Hadoop Distributed File System (HDFS).
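
    For orientation, an MPJ Express application uses the mpiJava-style API sketched below; as far as the program is concerned, nothing changes when it is launched by the YARN-based runtime instead of a conventional cluster launcher. A minimal sketch, assuming the standard mpi.* package that MPJ Express ships:

        import mpi.MPI;

        // Minimal MPI-like Java program in the MPJ Express (mpiJava) style:
        // every process reports its rank; rank 0 sends an array to rank 1.
        public class HelloMPJ {
            public static void main(String[] args) throws Exception {
                MPI.Init(args);
                int rank = MPI.COMM_WORLD.Rank();
                int size = MPI.COMM_WORLD.Size();
                System.out.println("Hello from rank " + rank + " of " + size);

                int[] buf = new int[4];
                if (rank == 0) {
                    for (int i = 0; i < buf.length; i++) buf[i] = i * i;
                    MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.INT, 1, 99);
                } else if (rank == 1) {
                    MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.INT, 0, 99);
                    System.out.println("rank 1 got " + java.util.Arrays.toString(buf));
                }
                MPI.Finalize();
            }
        }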

    An HPspmd Programming Model

    Building on research carried out in the Parallel Compiler Runtime Consortium (PCRC) project, this article discusses a language model that combines characteristic data-parallel features from the HPF standard with an explicitly SPMD programming style. This model, which we call the HPspmd model, is designed to facilitate direct calls to established libraries for parallel programming with distributed data. We describe a Java-based HPspmd language called HPJava.
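
    To convey the flavor of HPJava, here is a fragment reconstructed from the published papers; it uses HPJava's extended syntax (process grids, distributed ranges, the overall loop), so treat the details as approximate rather than as compilable Java.

        // Approximate HPJava fragment (extended syntax, not plain Java):
        Procs2 p = new Procs2(2, 2);                // 2 x 2 process grid
        on(p) {
            Range x = new BlockRange(N, p.dim(0));  // block-distributed ranges
            Range y = new BlockRange(N, p.dim(1));
            float [[-,-]] a = new float [[x, y]];   // distributed 2-D array
            overall (i = x for :)                   // iterate over local elements
                overall (j = y for :)
                    a[i, j] = 1.0f;
        }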

    The Distributed Array Descriptor for a PCRC HPF Compiler Version 2.0 SCCS-770d

    We describe a distributed array descriptor that can be used by a runtime supporting HPF-like compilers. This descriptor captures all five types of alignment and the BLOCK and CYCLIC distributions as defined in the HPF specification. In essence, this descriptor does not distinguish between whole arrays and array sections. Prior to this version, we had versions 1.0, 1.1, and 1.2. This version is not only an update of previous versions; more importantly, it also directly reflects our current practice in an HPF compilation effort.
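
    As a reminder of what the two distribution formats mean, this small illustrative program (not part of the descriptor itself) computes which of p processes owns global index i: BLOCK hands out contiguous chunks of size ceil(n/p), while CYCLIC deals elements round-robin.

        // Illustrative owner computation for HPF's two distribution formats.
        public class Distribution {
            // BLOCK: contiguous chunks of size ceil(n / p) per process.
            static int blockOwner(int i, int n, int p) {
                int chunk = (n + p - 1) / p;
                return i / chunk;
            }
            // CYCLIC: elements dealt round-robin across the p processes.
            static int cyclicOwner(int i, int p) {
                return i % p;
            }
            public static void main(String[] args) {
                int n = 10, p = 3;
                for (int i = 0; i < n; i++)
                    System.out.println(i + ": BLOCK -> " + blockOwner(i, n, p)
                            + ", CYCLIC -> " + cyclicOwner(i, p));
            }
        }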

    Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications

    The Message Passing Interface (MPI) standard continues to dominate the landscape of parallel computing as the de facto API for writing large-scale scientific applications. But critics argue that it is a low-level API that is harder to use than shared memory approaches. This paper addresses the issue of programming productivity by proposing a high-level, easy-to-use, and efficient programming API that hides and segregates complex low-level message passing code from the application-specific code. Our proposed API is inspired by communication patterns found in Gadget-2, an MPI-based parallel production code for cosmological N-body and hydrodynamic simulations. In this paper, we analyze Gadget-2 with a view to understanding what high-level Single Program Multiple Data (SPMD) communication abstractions might be developed to replace the intricate use of MPI in such an irregular application, and to do so without compromising efficiency. Our analysis revealed that the use of low-level MPI primitives, bundled with the computation code, makes Gadget-2 difficult to understand and probably hard to maintain. In addition, we found that the original Gadget-2 code contains a small handful of complex and recurring message passing patterns. We also noted that these complex patterns can be reorganized into a higher-level communication library with some modifications to the Gadget-2 code. We present the implementation and evaluation of one such message passing pattern (or schedule) that we term Collective Asynchronous Remote Invocation (CARI). As the name suggests, CARI is a collective variant of Remote Method Invocation (RMI), an attractive, high-level, and established paradigm in distributed systems programming. The CARI API might be implemented in several ways; we develop and evaluate two versions of this API on a compute cluster. The performance evaluation reveals that CARI versions of the Gadget-2 code perform as well as the original Gadget-2 code, while raising the level of abstraction considerably.
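
    To make the idea concrete, here is a hypothetical Java sketch of what a CARI-style interface could look like; the names and signatures are invented for illustration and are not the paper's actual API.

        import java.util.List;
        import java.util.concurrent.Future;

        // Hypothetical CARI-style interfaces (invented for illustration).
        // All processes enter invokeAll() together, each submitting requests
        // destined for other ranks; the library delivers each request to its
        // target's handler and hands back futures for the replies, so the
        // exchange behaves like an asynchronous, collective form of RMI.
        interface Handler<Q, R> {
            R handle(Q request);        // executed on the target process
        }

        interface CariChannel<Q, R> {
            List<Future<R>> invokeAll(List<Q> outgoing, Handler<Q, R> handler);
        }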

    Towards a Java Environment for SPMD Programming

    As a relatively straightforward object-oriented language, Java is a plausible basis for a scientific parallel programming language. We outline a conservative set of language extensions to support this kind of programming. The programming style advocated is Single Program Multiple Data (SPMD), with parallel arrays added as language primitives. Communications involving distributed arrays are handled through a standard library of collective operations. Because the underlying programming model is SPMD, direct calls to other communication packages are also possible from this language.
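
    This line of work handled those collective operations through the PCRC runtime library Adlib. The fragment below is again in approximate HPJava-style extended syntax, reconstructed from the papers rather than compilable: a single collective call redistributes one distributed array into another with a different distribution.

        // Approximate HPJava-style fragment (not plain Java): a collective
        // library call copies between two differently distributed arrays.
        Range x = new BlockRange(N, p.dim(0));   // block distribution
        Range y = new CyclicRange(N, p.dim(0));  // cyclic distribution
        float [[-]] a = new float [[x]];
        float [[-]] b = new float [[y]];
        // ... fill a locally ...
        Adlib.remap(b, a);   // collective: redistribute a's elements into b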

    A General Synthesis of Tris-Indole Derivatives as Potential Iron Chelators

    The development of a novel route for the synthesis of a new class of compounds is described. The first tripodal, tris-indole amines are prepared by straightforward routes.

    MPJ: MPI-like message passing for Java


    Device level communication libraries for high‐performance computing in Java

    This is the peer reviewed version of the following article: Taboada, G. L., Touriño, J., Doallo, R., Shafi, A., Baker, M. and Carpenter, B. (2011), Device level communication libraries for high-performance computing in Java. Concurrency Computat.: Pract. Exper., 23: 2382-2403. doi:10.1002/cpe.1777, which has been published in final form at https://doi.org/10.1002/cpe.1777. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.

    Abstract: Since its release, the Java programming language has attracted considerable attention from the high-performance computing (HPC) community because of its portability, high programming productivity, and built-in multithreading and networking support. As a consequence, several initiatives have been taken to develop a high-performance Java message-passing library to program distributed memory architectures, such as clusters. The performance of Java message-passing applications relies heavily on the communications performance. Thus, the design and implementation of low-level communication devices that support message-passing libraries is an important research issue in Java for HPC. MPJ Express is our Java message-passing implementation for developing high-performance parallel Java applications. Its public release currently contains three communication devices: the first is built using the Java New Input/Output (NIO) package for TCP/IP; the second is specifically designed for the Myrinet Express library on Myrinet; and the third supports thread-based shared memory communications. Although these devices have been successfully deployed in many production environments, previous performance evaluations of MPJ Express suggest that the buffering layer, tightly coupled with these devices, incurs a certain degree of copying overhead, which represents one of the main performance penalties. This paper presents a more efficient Java message-passing communications device, based on Java Input/Output sockets, that avoids this buffering overhead. Moreover, this device implements several strategies, both in the communication protocol and in the HPC hardware support, that optimize Java message-passing communications. In order to evaluate its benefits, this paper analyzes the performance of this device comparatively with other Java and native message-passing libraries on various high-speed networks, such as Gigabit Ethernet, Scalable Coherent Interface, Myrinet, and InfiniBand, as well as in a shared memory multicore scenario. The reported communication overhead reduction encourages the upcoming incorporation of this device in MPJ Express.

    Funding: Ministerio de Ciencia e Innovación; TIN2010-16735
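
    The buffering overhead the abstract refers to can be pictured with this toy comparison (illustrative only, not the MPJ Express device code): packing a double array into an intermediate byte buffer before writing costs an extra copy that streaming the primitive elements directly avoids.

        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.ByteBuffer;

        // Illustrative only: the extra copy a buffering layer introduces.
        public class BufferingDemo {
            // Buffered path: copy the array into a ByteBuffer, then write it.
            static void sendBuffered(OutputStream out, double[] msg) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(msg.length * Double.BYTES);
                for (double d : msg) buf.putDouble(d);   // the extra copy
                out.write(buf.array());
            }
            // Direct path: stream the elements without an intermediate buffer.
            static void sendDirect(OutputStream out, double[] msg) throws IOException {
                DataOutputStream dos = new DataOutputStream(out);
                for (double d : msg) dos.writeDouble(d);
                dos.flush();
            }
        }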